Data Sparse Matrix Computations - Lecture 4

Authors

  • John Ryan
  • Paul Upchurch
Abstract

Suppose we have a domain ΩS of N source points yj and a domain ΩT of M target points xi, and that these domains are "well-separated" (we will formalize this in Section 3). Our goal is to compute the influence of all source points on all target points. Let the M × N matrix K have entries [K]ij = K(xi, yj), and assume it is approximately low-rank, so that K ≈ UV^T with U of size M × P and V of size N × P. If P is small, then we can efficiently compute the effect of many points in the source domain on points in the target domain. This low-rank assumption is equivalent to saying that we can represent K by a sum of products of a function of x alone and a function of y alone:

K(x, y) ≈ Σ_{p=1}^{P} u_p(x) v_p(y).
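The idea above can be sketched numerically. The following is a minimal illustration, not code from the lecture: it assumes a hypothetical 1D setup with the kernel K(x, y) = 1/|x − y| on two well-separated intervals, and uses a truncated SVD to obtain the rank-P factors U and V.

```python
import numpy as np

# Hypothetical setup: 1D source and target clusters that are well separated.
rng = np.random.default_rng(0)
sources = rng.uniform(0.0, 1.0, size=200)    # y_j in Omega_S, N = 200
targets = rng.uniform(9.0, 10.0, size=150)   # x_i in Omega_T, M = 150

# Kernel K(x, y) = 1 / |x - y|, smooth when the domains are far apart.
K = 1.0 / np.abs(targets[:, None] - sources[None, :])  # M x N

# A truncated SVD gives the best rank-P factorization K ~ U V^T.
U_full, s, Vt = np.linalg.svd(K, full_matrices=False)
P = 5
U = U_full[:, :P] * s[:P]   # M x P
V = Vt[:P, :].T             # N x P

# Applying K through the factors costs O((M + N) P) instead of O(M N).
q = rng.standard_normal(sources.size)
exact = K @ q
approx = U @ (V.T @ q)
rel_err = np.linalg.norm(exact - approx) / np.linalg.norm(exact)
print(rel_err)  # small, because the well-separated kernel is nearly low-rank
```

Because the domains are far apart relative to their diameters, the singular values of K decay rapidly and a small P already gives high accuracy; for touching or overlapping domains the same kernel matrix would not be low-rank.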


Similar resources

Data-sparse matrix computations Lecture 25: Low Rank + Sparse Matrix Recovery

In the previous lecture, we observed that it is possible to recover a sparse solution to Ax = b by solving a minimization problem involving the 1-norm. In this lecture, we consider a matrix A that can be written as A = L + S, where L is a low-rank matrix and S is a sparse matrix, and seek a method that recovers L and S. We remark that Lecture 26 forms a sequel to these notes and addresses the te...


Data Sparse Matrix Computations - Lecture 3

Convolution as a Matrix/Vector multiplication Notice that (2) can be written as g = Y x, where g is a column vector with elements gk = (x ∗ y)k, x is a column vector with elements xk, and Y is an N × N matrix. By examining (2), we can deduce that the elements of the first row of the matrix Y should be Y0,: = {y0, y−1, y−2, ..., y−(N−1)}. Similarly, the second row should be Y1,: = {y1, y0, y−1, ...,...
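The construction described in this excerpt can be sketched as follows. This is an illustrative example, not code from the lecture: it assumes periodic (circular) convolution, so a negative index y−1 wraps around to y_{N−1}, making Y a circulant matrix.

```python
import numpy as np

# Hypothetical example: periodic convolution written as g = Y x.
rng = np.random.default_rng(1)
N = 8
x = rng.standard_normal(N)
y = rng.standard_normal(N)

# Build the N x N circulant matrix with Y[k, j] = y[(k - j) mod N],
# so row 0 is {y_0, y_{-1}, ..., y_{-(N-1)}} as in the excerpt.
Y = np.array([[y[(k - j) % N] for j in range(N)] for k in range(N)])

g = Y @ x  # g_k = sum_j y_{(k - j) mod N} x_j, the circular convolution

# Cross-check against the FFT-based circular convolution:
g_fft = np.real(np.fft.ifft(np.fft.fft(y) * np.fft.fft(x)))
print(np.allclose(g, g_fft))  # True
```

The agreement with the FFT route is exactly why such matrices admit fast multiplication: a circulant matrix is diagonalized by the DFT, so Y x costs O(N log N) instead of O(N^2).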


Data Sparse Matrix Computation - Lecture 11

2 Randomized algorithms: 2.1 Randomized low-rank factorization; 2.2 How to find such a Q; 2.3 How to construct Q with randomness; 2.4 An adaptive randomized range finder algorithm; 2.5 Example of implementation of the adaptive range approximation method ...


Data-sparse matrix computations Lecture 2: FFT - The Fast Fourier Transform (Lecturer: Anil Damle)

It is, without a doubt, one crucial piece of the toolbox of numerical methods. The FFT, or Fast Fourier Transform, is a fast algorithm used to compute the Discrete Fourier Transform. It is one of the most common discrete transforms used today. Note: While many other algorithms used throughout this course can be coded up for use in other projects, you shouldn't attempt to do this with the FFT. The ...


Performance comparison of data-reordering algorithms for sparse matrix-vector multiplication in edge-based unstructured grid computations

Several performance improvements for finite-element edge-based sparse matrix–vector multiplication algorithms on unstructured grids are presented and tested. Edge data structures for tetrahedral meshes and triangular interface elements are treated, focusing on nodal and edges renumbering strategies for improving processor and memory hierarchy use. Benchmark computations on Intel Itanium 2 and P...





Publication date: 2017